Colombia COVID-19 - Central region
Our project
We chose central Colombia, mainly because it is the region that contains the capital.
We built a model for the number of confirmed cases using all the other covariates (plus some we created), and we estimated the predictive accuracy of our selected model.
We consider the following departments/districts to be central Colombia: Bogotá D.C., Boyacá, Tolima, Cundinamarca, Meta, Quindío, Cauca, Valle del Cauca, Risaralda, Caldas, Antioquia, Santander, Casanare.
Loading the dataset
Note: the department Valle del Cauca is completely missing from this dataset!
colombia_covid <- as.data.frame(read_csv("data/covid19co.csv"))
cols <- colnames(colombia_covid)[c(1, 4, 5, 6, 7, 8, 9, 11, 14)]
colombia_covid <- colombia_covid[cols]
colombia_covid <- colombia_covid[, c(1, 9, 2, 3, 4, 5, 6, 7, 8)]
colnames(colombia_covid) <- c("ID de caso", "Fecha de diagnóstico", "Ciudad de ubicación", "Departamento o Distrito", "Atención", "Edad" , "Sexo", "Tipo", "País de procedencia")
#colombia_covid$`Departamento o Distrito`[which(colombia_covid$`Departamento o Distrito` == "Valle Del Cauca")] <- "Valle del Cauca"
central.colombia.dep <- c("Bogotá D.C.", "Tolima", "Cundinamarca", "Meta", "Boyacá", "Quindío", "Cauca",
                          "Valle del Cauca", "Risaralda", "Caldas", "Antioquia", "Santander", "Casanare")
central.colombia.rows <- which(colombia_covid$`Departamento o Distrito` %in% central.colombia.dep)
colombia_covid <- colombia_covid[central.colombia.rows, ]
colombia_covid <- colombia_covid[-which(colombia_covid$`Fecha de diagnóstico` == "-"), ]
Description of variables
ID de caso: ID of the confirmed case.
Fecha de diagnóstico: Date in which the disease was diagnosed.
Ciudad de ubicación: City where the case was diagnosed.
Departamento o Distrito: Department or district where the city belongs to.
Atención: Situation of the patient: recovered, at home, at the hospital, at the ICU or deceased.
Edad: Age of the confirmed case.
Sexo: Sex of the confirmed case.
Tipo: How the person got infected: in Colombia, abroad or unknown.
País de procedencia: Country of origin if the person got infected abroad.
Map
Here we can see our selected cities. The colour of the pins reflects the number of cases: green if there are fewer than \(10\), orange if fewer than \(100\), red otherwise.
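That rule can be sketched as a small helper function (the name `pin_color` is ours, not taken from the report's code):

```r
# Map a case count to a pin colour: green below 10, orange below 100, red otherwise
pin_color <- function(n_cases) {
  ifelse(n_cases < 10, "green",
         ifelse(n_cases < 100, "orange", "red"))
}

pin_color(c(3, 42, 250))  # "green" "orange" "red"
```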
Preprocessing
We had to clean the dataset:
We transformed the Fecha de diagnóstico variable into a Date-type variable.
We fixed the variable ID de caso (since we removed some departments, and therefore some rows, the numbers were no longer consecutive).
We created a variable Grupo de edad.
We cleaned the column País de procedencia (replacing cities with their country) and created the variable Continente de procedencia (since the former is too fragmented, we aggregated by continent).
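These steps can be illustrated on toy values (the real code operates on the colombia_covid columns named above; the date format and the 0_18 lower age group are our assumptions, inferred from the tables below):

```r
# 1. Parse dates into Date objects (the "%d/%m/%Y" raw format is an assumption)
fechas <- as.Date(c("06/03/2020", "09/03/2020"), format = "%d/%m/%Y")

# 2. Renumber case IDs so they are consecutive after dropping rows
ids <- seq_len(length(fechas))

# 3. Bin ages into the groups used later in the report
edades <- c(19, 50, 55, 77)
grupos <- cut(edades,
              breaks = c(0, 18, 30, 45, 60, 75, Inf),
              labels = c("0_18", "19_30", "31_45", "46_60", "60_75", "76+"),
              include.lowest = TRUE)
as.character(grupos)  # "19_30" "46_60" "46_60" "76+"
```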
## ID de caso Fecha de diagnóstico Ciudad de ubicación
## 1 1 2020-03-06 Bogotá D.C.
## 2 2 2020-03-09 Medellín
## 3 3 2020-03-11 Medellín
## Departamento o Distrito Atención Edad Sexo Tipo
## 1 Bogotá D.C. Recuperado 19 F Importado
## 2 Antioquia Recuperado 50 F Importado
## 3 Antioquia Recuperado 55 M Relacionado
## País de procedencia Grupo de edad
## 1 Italia 19_30
## 2 España 46_60
## 3 Nan 46_60
New dataset I
## Date Elapsed time New cases/day Cumulative cases
## 1 2020-03-06 0 1 1
## 2 2020-03-09 3 1 2
## 3 2020-03-11 5 5 7
## 4 2020-03-12 6 2 9
## 5 2020-03-13 7 3 12
## 6 2020-03-14 8 14 26
## 7 2020-03-15 9 13 39
## 8 2020-03-16 10 8 47
## 9 2020-03-17 11 13 60
## 10 2020-03-18 12 9 69
New dataset II
## Date Elapsed time Department Department ID New cases/day
## 1 2020-03-09 3 Antioquia 1 1
## 2 2020-03-11 5 Antioquia 1 3
## 3 2020-03-14 8 Antioquia 1 3
## 4 2020-03-15 9 Antioquia 1 1
## 5 2020-03-19 13 Antioquia 1 3
## 6 2020-03-20 14 Antioquia 1 11
## 7 2020-03-21 15 Antioquia 1 3
## 8 2020-03-22 16 Antioquia 1 5
## 9 2020-03-23 17 Antioquia 1 22
## 10 2020-03-25 19 Antioquia 1 8
## Cumulative cases/Department Mean age
## 1 1 50.00000
## 2 4 35.66667
## 3 7 30.00000
## 4 8 55.00000
## 5 11 52.33333
## 6 22 39.81818
## 7 25 31.00000
## 8 30 45.40000
## 9 52 36.45455
## 10 60 29.75000
Exploring the dataset
Some scattered facts about the pandemic in Colombia (https://en.wikipedia.org/wiki/COVID-19_pandemic_in_Colombia):
The quarantine started on the 20th of March; since our data run from the 6th of March to the 2nd of April, it is very likely that quarantine effects are not yet visible in our data.
On the 26th of March the machine that prepared samples for processing and subsequent diagnosis of COVID-19 broke down, which slowed the rate at which results were produced. This could explain the very low number of confirmed cases on the following days.
The previous plot represents the daily incidence of the disease across all the departments we are considering.
Other plots
Most of the cases are in the capital, Bogotá.
Here the growth looks exponential, which is consistent with the fact that we are studying the early stages of the outbreak.
brks <- seq(-250, 250, 50)
lbls <- as.character(c(seq(-250, 0, 50), seq(50, 250, 50)))
ggplot(data=colombia_covid, aes(x=`Departamento o Distrito`, fill = Sexo)) +
geom_bar(data = subset(colombia_covid, Sexo == "F")) +
geom_bar(data = subset(colombia_covid, Sexo == "M"), aes(y=..count..*(-1))) +
scale_y_continuous(breaks = brks,
labels = lbls) +
coord_flip() +
labs(title="Spread of the disease across genders",
y = "Number of cases",
x = "Department",
fill = "Gender") +
theme_tufte() +
theme(plot.title = element_text(hjust = .5),
axis.ticks = element_blank()) +
scale_fill_brewer(palette = "Dark2")
The disease (number of cases) is more or less equally distributed across genders.
#compute percentage so that we can label more precisely the pie chart
age_groups_pie <- colombia_covid %>%
group_by(`Grupo de edad`) %>%
count() %>%
ungroup() %>%
mutate(per=`n`/sum(`n`)) %>%
arrange(desc(`Grupo de edad`))
age_groups_pie$label <- scales::percent(age_groups_pie$per)
age_pie <- ggplot(age_groups_pie, aes(x = "", y = per, fill = factor(`Grupo de edad`))) +
geom_bar(stat="identity", width = 1) +
theme(axis.line = element_blank(),
plot.title = element_text(hjust=0.5)) +
labs(fill="Age groups",
x=NULL,
y=NULL,
title="Distribution of the disease across ages") +
coord_polar(theta = "y") +
#geom_text(aes(x=1, y = cumsum(per) - per/2, label=label))
geom_label_repel(aes(x=1, y=cumsum(per) - per/2, label=label), size=3, show.legend = F, nudge_x = 0) +
guides(fill = guide_legend(title = "Group"))
age_pie
People from 31 to 45 years old are the most affected by the disease, and people over 76 years old are the least affected. Colombia is a very young country: in 2018 the median age of the population was 30.4 years, and the largest age group, people from 25 to 54 years old, makes up 41.98% of the population (https://www.indexmundi.com/colombia/demographics_profile.html).
Age-Sex plot
There isn’t much difference between the sexes across the age groups.
Tipo plot
theme_set(theme_classic())
ggplot(colombia_covid, aes(x = `Fecha de diagnóstico`)) +
scale_fill_brewer(palette = "Set3") +
geom_bar(aes(fill=Tipo), width = 0.8) +
theme(axis.text.x = element_text(angle=65, vjust=0.6)) +
labs(title = "Daily number of confirmed cases",
subtitle = "subdivided across type",
x = "Date of confirmation",
fill = "Type")
Tipo
We believe that “en estudio” means it is not yet clear whether the case is imported or not. Let’s count the cases of each type:
type_pie <- colombia_covid %>%
group_by(Tipo) %>%
count() %>%
ungroup() %>%
mutate(per=`n`/sum(`n`)) %>%
arrange(desc(Tipo))
type_pie$label <- scales::percent(type_pie$per)
type_pie<-type_pie[names(type_pie)!="per"]
colnames(type_pie)<-c("Tipo", "Total number", "Percentage")
type_pie
## # A tibble: 3 x 3
## Tipo `Total number` Percentage
## <chr> <int> <chr>
## 1 Relacionado 8570 12.3%
## 2 Importado 687 1.0%
## 3 En Estudio 60406 86.7%
Most of the cases are still under study (En Estudio); among the classified cases, people infected inside Colombia (Relacionado) far outnumber the imported ones. People who contracted the disease abroad came mainly from Europe, followed by North America and Central America.
Correlation between the categorical variables
We used Cramér’s V to measure the association between our categorical variables.
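For reference, for two categorical variables with \(r\) and \(c\) levels observed on \(n\) cases, Cramér's V is defined as
\[
V = \sqrt{\frac{\chi^2}{n \, \min(r-1,\, c-1)}},
\]
which ranges from \(0\) (no association) to \(1\) (perfect association); the function cv.test below computes exactly this quantity.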
library(data.table) # data mgmt
library(gtools) # combination
library(plotly) # interactive graphics
cramervdata <- colombia_covid %>% select(`Ciudad de ubicación`,`Departamento o Distrito`,Atención,Sexo,Tipo,`País de procedencia`,`Grupo de edad`)
cramervdata[] <- lapply(cramervdata, factor)
cramervdata <- as.data.table(cramervdata)
cat_var <- colnames(cramervdata)
# Function to compute Cramer's V
# https://www.r-bloggers.com/example-8-39-calculating-cramers-v/
cv.test <- function(x, y) {
  CV <- sqrt(chisq.test(x, y, correct = FALSE)$statistic /
               (length(x) * (min(length(unique(x)), length(unique(y))) - 1)))
  return(as.numeric(CV))
}
# Apply the function to the combination of categorical variable
v_cramer_all <- function(cat_var, df){
cat_var_grid <- data.table(combinations(n = length(cat_var), r = 2, v = cat_var, repeats.allowed = FALSE))
do.call(rbind,
apply(cat_var_grid, 1, function(x){
tmp <- as.character(x)
vec1 <- unlist(df[,tmp[1], with = FALSE])
vec2 <- unlist(df[,tmp[2], with = FALSE])
data.table(
variable_x = tmp[1],
variable_y = tmp[2],
chi2 = chisq.test(x = vec1, vec2, correct=FALSE)$p.value,
v_cramer = cv.test(x = vec1, y = vec2)
)
}))
}
results <- v_cramer_all(cat_var = cat_var, df = cramervdata)
#Reducing number of decimals of variable v_cramer
results <- results %>%
mutate_if(is.numeric, round, digits = 2)
# Heatmap vizualisation with ggplot2 -------------------------------------
g <- ggplot(results, aes(variable_x, variable_y)) +
geom_tile(aes(fill = v_cramer), colour = "black") +
theme(axis.text.x=element_text(angle=45, hjust=1)) +
scale_fill_gradient(low = "white", high = "steelblue") +
theme_bw() + xlab(NULL) + ylab(NULL) +
theme(axis.text.x=element_text(angle = -90, hjust = 0)) +
ggtitle("Cramer's V heatmap")+
geom_text(aes(label=v_cramer))
g
The frequentist approach
Train/test split
We split the data so as to hold out the last few points for prediction, both because we have few points and because with these models it makes little sense to predict a whole week ahead: the situation changes very fast.
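In code the split is simply an index partition (a toy sketch; `data1` and the row counts are those used in the model chunks below, and note that the prediction chunks actually use rows 120:126, re-including the last training day):

```r
# Fit on the first 120 rows of the day-level data, hold out the rest
n_total   <- 126               # rows of data1, as implied by the model code
train_idx <- 1:120
test_idx  <- setdiff(seq_len(n_total), train_idx)
test_idx  # 121 122 123 124 125 126
```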
Poisson
Poisson with Elapsed time as predictor
poisson1 <- glm(`Cumulative cases` ~ `Elapsed time`, data=data1[1:120, ], family=poisson)
plot(poisson1, which=1)
pred.pois1 <- poisson1$fitted.values
res.st1 <- (data1$`Cumulative cases`[1:120] - pred.pois1)/sqrt(pred.pois1)
#n=120, k=2, n-k=118
print(paste("Estimated overdispersion", sum(res.st1^2)/118))
poisson1.pred <- predict(poisson1, newdata = data1[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson1.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson1$residuals^2))
#print(sprintf("MSE: %0.2f", sum(poisson1$residuals^2)/poisson1$df.residual))
#print(sprintf("MSE: %0.2f", anova(poisson1)['Residuals', 'Mean Sq']))
paste("AIC:", poisson1$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson1$null.deviance, deviance(poisson1)), 2))
## [1] "Estimated overdispersion 126.060786858437"
## [1] "RMSE: 1760.50922940986"
## [1] "AIC: 21915.0291925917"
## [1] "Null deviance: 1740689.09" "Residual deviance: 20716.39"
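The overdispersion figure printed above is the sum of squared Pearson residuals divided by the residual degrees of freedom,
\[
\hat{\phi} \;=\; \frac{1}{n-k} \sum_{i=1}^{n} \frac{\left(y_i - \hat{\mu}_i\right)^2}{\hat{\mu}_i},
\]
with \(n = 120\) observations and \(k = 2\) estimated parameters here; a value far above \(1\), like the \(126\) obtained, signals that the Poisson assumption \(\operatorname{Var}(Y_i) = \mathbb{E}[Y_i]\) is strongly violated.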
Predictive accuracy of the Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0416666666666667"
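The code that produces this frequency is not shown in the report; a minimal sketch of how such a coverage could be computed, assuming the intervals come from the Poisson quantile function around the fitted means, is:

```r
# Sketch: fraction of observed counts falling inside 95% Poisson intervals
# built around fitted means (y_obs and mu_hat below are illustrative values)
coverage <- function(y_obs, mu_hat, level = 0.95) {
  alpha <- 1 - level
  lower <- qpois(alpha / 2, lambda = mu_hat)
  upper <- qpois(1 - alpha / 2, lambda = mu_hat)
  mean(y_obs >= lower & y_obs <= upper)
}

coverage(y_obs = c(10, 55, 300), mu_hat = c(12, 50, 200))  # 2/3: the last point falls outside
```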
Using cases_relev_dep
poisson1A <- glm(`Cumulative cases/Department` ~ `Elapsed time`, data=cases_relev_dep, family=poisson)
plot(poisson1A, which=1)
pred.pois1A <- poisson1A$fitted.values
res.st1A <- (cases_relev_dep$`Cumulative cases/Department` - pred.pois1A)/sqrt(pred.pois1A)
#n=120, k=2, n-k=118
print(paste("Estimated overdispersion", sum(res.st1A^2)/118))
poisson1A.pred <- predict(poisson1A, newdata = data1[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson1A.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson1A$residuals^2))
paste("AIC:", poisson1A$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson1A$null.deviance, deviance(poisson1A)), 2))
## [1] "Estimated overdispersion 3889298.0739339"
## [1] "RMSE: 55854.604369896"
## [1] "AIC: 4411977.61248341"
## [1] "Null deviance: 5910917.82" "Residual deviance: 4404272.89"
We can see that the AIC is enormous.
Predictive accuracy of the Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0"
Using New cases/day
poisson1B <- glm(`New cases/day` ~ `Elapsed time`, data=cases, family=poisson)
#print(paste("Estimated overdispersion", sum(res.st^2)/23))
plot(poisson1B, which=1)
poisson1B.pred <- predict(poisson1B, newdata = data1[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson1B.pred - data1$`New cases/day`[120:126])^2)))
#paste("MSE:", mean(poisson1B$residuals^2))
paste("AIC:", poisson1B$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson1B$null.deviance, deviance(poisson1B)), 2))
## [1] "RMSE: 58646.54619116"
## [1] "AIC: 8427.11326445979"
## [1] "Null deviance: 95891.85" "Residual deviance: 7519.3"
Predictive accuracy of the Poisson model for New cases/day
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0416666666666667"
Poisson with Elapsed time plus Elapsed time^2 as predictor
poisson1C <- glm(`Cumulative cases` ~ `Elapsed time` + I(`Elapsed time`^2), data=cases[1:120, ], family=poisson)
pred.pois1C <- poisson1C$fitted.values
res.st1C <- (cases$`Cumulative cases`[1:120] - pred.pois1C)/sqrt(pred.pois1C)
#n=120, k=3, n-k=117
print(paste("Estimated overdispersion", sum(res.st1C^2)/117))
poisson1C.pred <- predict(poisson1C, newdata = cases[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson1C.pred - cases$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson1$residuals^2))
#print(sprintf("MSE: %0.2f", sum(poisson1$residuals^2)/poisson1$df.residual))
#print(sprintf("MSE: %0.2f", anova(poisson1)['Residuals', 'Mean Sq']))
paste("AIC:", poisson1C$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson1C$null.deviance, deviance(poisson1C)), 2))
plot(poisson1C, which=1)
## [1] "Estimated overdispersion 75.2646213768741"
## [1] "RMSE: 6940.37335779742"
## [1] "AIC: 12062.7254007421"
## [1] "Null deviance: 1740689.09" "Residual deviance: 10862.08"
Predictive accuracy of the Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0"
Poisson with Elapsed time plus Sexo
poisson2 <- glm(`Cumulative cases` ~ `Elapsed time` + Sexo_M, data=data1[1:120, ], family=poisson)
poisson2.pred <- predict(poisson2, newdata = data1[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson2.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson2$residuals^2))
paste("AIC:", poisson2$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson2$null.deviance, deviance(poisson2)), 2))
plot(poisson2, which=1)
## [1] "RMSE: 3892.80826581986"
## [1] "AIC: 20887.674847589"
## [1] "Null deviance: 1740689.09" "Residual deviance: 19687.03"
Predictive accuracy of the Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0416666666666667"
Poisson with Elapsed time plus Grupo de edad
poisson3 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1[1:120, ], family=poisson)
pred.pois3 <- poisson3$fitted.values
res.st3 <- (data1$`Cumulative cases`[1:120] - pred.pois3)/sqrt(pred.pois3)
#n=120, k=7, n-k=113
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st3^2)/113))
poisson3.pred <- predict(poisson3, newdata = data1[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson3.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson3$residuals^2))
paste("AIC:", poisson3$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson3$null.deviance, deviance(poisson3)), 2))
plot(poisson3, which=1)
## [1] "Estimated overdispersion 439618.33736315"
## [1] "RMSE: 3249.31716470375"
## [1] "AIC: 20650.0253613547"
## [1] "Null deviance: 1740689.09" "Residual deviance: 19441.38"
Predictive accuracy of the Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0333333333333333"
Poisson with Elapsed time, Age and Departments as predictors
poisson4 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima`, data=data1[1:120, ], family=poisson)
plot(poisson4, which=1)
pred.pois4 <- poisson4$fitted.values
res.st4 <- (data1$`Cumulative cases`[1:120] - pred.pois4)/sqrt(pred.pois4)
#n=120, k=17, n-k=103
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st4^2)/103))
poisson4.pred <- predict(poisson4, newdata = data1[120:126, ], type="response")
#paste("Real: ", data1$`Cumulative cases`[120:126], "Predict: ", poisson4.pred)
paste("RMSE:", sqrt(mean((poisson4.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson4$residuals^2))
paste("AIC:", poisson4$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson4$null.deviance, deviance(poisson4)), 2))
## [1] "Estimated overdispersion 542039.960916532"
## [1] "RMSE: 8243.70282563781"
## [1] "AIC: 18085.2709276108"
## [1] "Null deviance: 1740689.09" "Residual deviance: 16854.63"
Predictive accuracy of the Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0583333333333333"
Poisson with Elapsed time, Age and Departments as predictors for New cases/day
poisson4bis <- glm(`New cases/day` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima`, data=data1[1:120, ], family=poisson)
plot(poisson4bis, which=1)
pred.pois4bis <- poisson4bis$fitted.values
res.st4bis <- (data1$`New cases/day`[1:120] - pred.pois4bis)/sqrt(pred.pois4bis)
#n=120, k=18, n-k=102
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st4bis^2)/102))
poisson4bis.pred <- predict(poisson4bis, newdata = data1[120:126, ], type="response")
paste("RMSE:", sqrt(mean((poisson4bis.pred - data1$`New cases/day`[120:126])^2)))
#paste("MSE:", mean(poisson4$residuals^2))
paste("AIC:", poisson4bis$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson4bis$null.deviance, deviance(poisson4bis)), 2))
## [1] "Estimated overdispersion 6719176.07903642"
## [1] "RMSE: 1225.26041309972"
## [1] "AIC: 2860.38819023372"
## [1] "Null deviance: 64369.08" "Residual deviance: 1979.14"
Predictive accuracy of the Poisson model for New cases/day
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.15"
Poisson with Elapsed time, Elapsed time^2, Age and Departments as predictors
poisson5 <- glm(`Cumulative cases` ~ `Elapsed time` + I(`Elapsed time`^2) + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima`, data=data1[1:120, ], family=poisson)
plot(poisson5, which=1)
pred.pois5 <- poisson5$fitted.values
res.st5 <- (data1$`Cumulative cases`[1:120] - pred.pois5)/sqrt(pred.pois5)
#n=120, k=19, n-k=101
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st5^2)/101))
poisson5.pred <- predict(poisson5, newdata = data1[120:126, ], type="response")
#paste("Real: ", data1$`Cumulative cases`[120:126], "Predict: ", poisson5.pred)
paste("RMSE:", sqrt(mean((poisson5.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson4$residuals^2))
paste("AIC:", poisson5$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson5$null.deviance, deviance(poisson5)), 2))
## [1] "Estimated overdispersion 1238590.22360691"
## [1] "RMSE: 8441.28307878747"
## [1] "AIC: 8781.84768119021"
## [1] "Null deviance: 1740689.09" "Residual deviance: 7549.2"
Predictive accuracy of the Poisson model for Cumulative cases
## [1] "Frequency of coverage: 0.0333333333333333"
Autocorrelation to compare Poisson models
We generated 10,000 samples from each Poisson model, calculated the lag-1 autocorrelation of each simulated series, and compared it against the autocorrelation of our original sample.
Autocorrelation for the models when the response variable is Cumulative Cases
#Poisson1
rpois1 <- simulate(poisson1,nsim=10000)
matrix1 <- data.matrix(rpois1)
acf1 <- lapply(split(matrix1,col(matrix1)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_1 <- c()
for(i in 1:10000){ Poisson_1 <- c(Poisson_1, acf1[[c(i,1,2)]])}
#Poisson1b -> this is the model poisson1C with elapsed time squared. Let's call it Poisson1b to keep an order in the autocorrelation plot
rpois1b <- simulate(poisson1C,nsim=10000)
matrix1b <- data.matrix(rpois1b)
acf1b <- lapply(split(matrix1b,col(matrix1b)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_1b <- c()
for(i in 1:10000){ Poisson_1b <- c(Poisson_1b, acf1b[[c(i,1,2)]])}
#Poisson2
rpois2 <- simulate(poisson2,nsim=10000)
matrix2 <- data.matrix(rpois2)
acf2 <- lapply(split(matrix2,col(matrix2)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_2 <- c()
for(i in 1:10000){ Poisson_2 <- c(Poisson_2, acf2[[c(i,1,2)]])}
#Poisson3
rpois3 <- simulate(poisson3,nsim=10000)
matrix3 <- data.matrix(rpois3)
acf3 <- lapply(split(matrix3,col(matrix3)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_3 <- c()
for(i in 1:10000){ Poisson_3 <- c(Poisson_3, acf3[[c(i,1,2)]])}
#Poisson4
rpois4 <- simulate(poisson4,nsim=10000)
matrix4 <- data.matrix(rpois4)
acf4 <- lapply(split(matrix4,col(matrix4)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_4 <- c()
for(i in 1:10000){ Poisson_4 <- c(Poisson_4, acf4[[c(i,1,2)]])}
#Poisson4b -> this is the model poisson5 with elapsed time squared. Let's call it Poisson4b to keep a pattern in the autocorrelation plot
rpois4b <- simulate(poisson5,nsim=10000)
matrix4b <- data.matrix(rpois4b)
acf4b <- lapply(split(matrix4b,col(matrix4b)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_4b <- c()
for(i in 1:10000){ Poisson_4b <- c(Poisson_4b, acf4b[[c(i,1,2)]])}
#autocorrelation dataframe
autocor <- data.frame(Poisson_1,Poisson_1b,Poisson_2,Poisson_3,Poisson_4,Poisson_4b)
library(reshape2)
x1 <- melt(autocor)
#autocorrelation of original sample
tmp <- acf(data1$`Cumulative cases`,lag.max=1,plot=F)
hline1 <- as.numeric((unlist(tmp[1]))[1])
#boxplot
theme_set(theme_gray())
p<-ggplot(x1, aes(x=variable, y=value, fill=variable)) +
geom_boxplot()+
labs(title="Sample Autocorrelation", x="Models", y="Autocorrelation",fill='Models')+
scale_fill_manual(values=c("#FF66FF", "#FFFF00", "#00CCFF", "green", "#6633CC", "#FF6633"))
p + geom_hline(aes(yintercept=hline1,linetype="Autocorrelation"),color="red",size=1)+
scale_linetype_manual(name = "Original sample", values = c(1, 1))
Autocorrelation for the models when the response variable is New cases/day
#PoissonA -> This is the model poisson1B. The predictor is Elapsed time. I am changing the name to make easier to understand the presentation
rpoisA <- simulate(poisson1B,nsim=10000)
matrixA <- data.matrix(rpoisA)
acfA <- lapply(split(matrixA,col(matrixA)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_A <- c()
for(i in 1:10000){ Poisson_A <- c(Poisson_A, acfA[[c(i,1,2)]])}
#PoissonB <- This is the model poisson4bis. The predictors are elapsed time, age and departments. I am changing the name to make easier to understand the presentation.
rpoisB <- simulate(poisson4bis,nsim=10000)
matrixB <- data.matrix(rpoisB)
acfB <- lapply(split(matrixB,col(matrixB)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_B <- c()
for(i in 1:10000){ Poisson_B <- c(Poisson_B, acfB[[c(i,1,2)]])}
#autocorrelation dataframe
autocor <- data.frame(Poisson_A,Poisson_B)
library(reshape2)
x1 <- melt(autocor)
#autocorrelation of original sample
tmp2 <- acf(cases$`New cases/day`,lag.max=1,plot=F)
hline2 <- as.numeric((unlist(tmp2[1]))[1])
#boxplot
theme_set(theme_gray())
p<-ggplot(x1, aes(x=variable, y=value, fill=variable)) +
geom_boxplot()+
labs(title="Sample Autocorrelation", x="Models", y="Autocorrelation",fill='Models')+
scale_fill_manual(values=c("#00CCFF", "green"))
p + geom_hline(aes(yintercept=hline2,linetype="Autocorrelation"),color="red",size=1)+
scale_linetype_manual(name = "Original sample", values = c(1, 1))
Autocorrelation for the model when the response variable is Cumulative cases/Department
#Poisson Cumulative cases/Department -> This is the model poisson1A. The predictor is Elapsed time. I am changing the name to make easier to understand the presentation
rpoisd <- simulate(poisson1A,nsim=10000)
matrixd <- data.matrix(rpoisd)
acfd <- lapply(split(matrixd,col(matrixd)), function(ts) acf(ts, lag.max=1,plot=F))
Poisson_Cumulative_Cases_Per_Department <- c()
for(i in 1:10000){ Poisson_Cumulative_Cases_Per_Department <- c(Poisson_Cumulative_Cases_Per_Department, acfd[[c(i,1,2)]])}
#autocorrelation dataframe
autocor <- data.frame(Poisson_Cumulative_Cases_Per_Department)
library(reshape2)
x1 <- melt(autocor)
#autocorrelation of original sample
tmp3 <- acf(cases_relev_dep$`Cumulative cases/Department`,lag.max=1,plot=F)
hline3 <- as.numeric((unlist(tmp3[1]))[1])
#boxplot
theme_set(theme_gray())
p<-ggplot(x1, aes(x=variable, y=value, fill=variable)) +
geom_boxplot()+
labs(title="Sample Autocorrelation", x="Models", y="Autocorrelation",fill='Models')+
scale_fill_manual(values=c("green"))
p + geom_hline(aes(yintercept=hline3,linetype="Autocorrelation"),color="red",size=1)+
scale_linetype_manual(name = "Original sample", values = c(1, 1))
ANOVA to compare the Poisson models
## Analysis of Deviance Table
##
## Model 1: `Cumulative cases` ~ `Elapsed time`
## Model 2: `Cumulative cases` ~ `Elapsed time` + Sexo_M
## Model 3: `Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` +
## `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` +
## `Grupo de edad_76+`
## Model 4: `Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` +
## `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` +
## `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` +
## `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` +
## `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` +
## `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` +
## `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` +
## `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima`
## Resid. Df Resid. Dev Df Deviance Pr(>Chi)
## 1 118 20716
## 2 117 19687 1 1029.35 < 2.2e-16 ***
## 3 113 19441 4 245.65 < 2.2e-16 ***
## 4 102 16855 11 2586.75 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Quasi-Poisson
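The quasi-Poisson family keeps the Poisson mean structure but estimates a dispersion parameter \(\phi\) from the data, so that
\[
\operatorname{Var}(Y_i) = \phi\, \mu_i .
\]
Since there is no true likelihood behind the quasi-likelihood fit, R reports no AIC for these models, which is why the outputs below show "AIC: NA".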
Quasi Poisson with Elapsed time as predictor
poisson1quasi <- glm(`Cumulative cases` ~ `Elapsed time`, data=data1[1:120, ], family=quasipoisson)
plot(poisson1quasi, which=1)pred.poisq <- poisson1quasi$fitted.values
res.stq <- (data1$`Cumulative cases` - pred.poisq)/sqrt(summary(poisson1quasi)$dispersion*pred.poisq)
#n=120, k= ?, n-k=?
print(paste("Estimated overdispersion", sum(res.stq^2)/23))
poisson1quasi.pred <- predict(poisson1quasi, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((poisson1quasi.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson1quasi$residuals^2))
paste("AIC:", poisson1quasi$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson1quasi$null.deviance, deviance(poisson1quasi)), 2))
## [1] "Estimated overdispersion 15728.0601174826"
## [1] "RMSE: 1760.50922940986"
## [1] "AIC: NA"
## [1] "Null deviance: 1740689.09" "Residual deviance: 20716.39"
Predictive accuracy of the Quasi-Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.15"
Quasi Poisson with Elapsed time and Age as predictor
poisson2quasi <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1[1:120, ], family=quasipoisson)
plot(poisson2quasi, which=1)
pred.poisq2 <- poisson2quasi$fitted.values
res.stq2 <- (data1$`Cumulative cases`[1:120] - pred.poisq2)/sqrt(summary(poisson2quasi)$dispersion*pred.poisq2)
#n=120, k=7, n-k=113
print(paste("Estimated overdispersion", sum(res.stq2^2)/113))
poisson2quasi.pred <- predict(poisson2quasi, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((poisson2quasi.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(poisson2quasi$residuals^2))
paste("AIC:", poisson2quasi$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(poisson2quasi$null.deviance, deviance(poisson2quasi)), 2))
## [1] "Estimated overdispersion 21853.7140901657"
## [1] "RMSE: 3249.31716470375"
## [1] "AIC: NA"
## [1] "Null deviance: 1740689.09" "Residual deviance: 19441.38"
Predictive accuracy of the Quasi-Poisson model for Cumulative cases
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.325"
Negative Binomial
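The negative binomial model handles overdispersion through a quadratic mean–variance relationship,
\[
\operatorname{Var}(Y_i) = \mu_i + \frac{\mu_i^2}{\theta},
\]
where the dispersion parameter \(\theta\) is estimated by glm.nb together with the regression coefficients.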
Negative Binomial with Elapsed time as predictor
nb1 <- glm.nb(`Cumulative cases` ~ `Elapsed time`, data=data1[1:120, ])
#n=120, k=2, n-k=118
stdres <- rstandard(nb1)
print(paste("Estimated overdispersion", sum(stdres^2)/118))
nb1.pred <- predict(nb1, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((nb1.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(nb1$residuals^2))
paste("AIC:", nb1$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(nb1$null.deviance, deviance(nb1)), 2))
## [1] "Estimated overdispersion 177.301055417492"
## [1] "RMSE: 1765.66108990545"
## [1] "AIC: 21911.1770489216"
## [1] "Null deviance: 1738722.15" "Residual deviance: 20710.41"
Predictive accuracy of the Negative Binomial model
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0416666666666667"
Negative Binomial with Elapsed time plus Age as predictors
nb2 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1[1:120, ])
plot(nb2, which=1)
nb2.pred <- predict(nb2, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((nb2.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(nb2$residuals^2))
paste("AIC:", nb2$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(nb2$null.deviance, deviance(nb2)), 2))
## [1] "RMSE: 3253.44941458765"
## [1] "AIC: 20645.2695856094"
## [1] "Null deviance: 1738975.56" "Residual deviance: 19434.52"
Predictive accuracy of the Negative Binomial model
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0333333333333333"
Negative Binomial with Elapsed time plus Department as predictors
nb3 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas`+`Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`, data=data1[1:120, ])
plot(nb3, which=1)
nb3.pred <- predict(nb3, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((nb3.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(nb3$residuals^2))
paste("AIC:", nb3$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(nb3$null.deviance, deviance(nb3)), 2))
## [1] "RMSE: 3335.37126373896"
## [1] "AIC: 19270.2478159666"
## [1] "Null deviance: 1739087.22" "Residual deviance: 18047.5"
Predictive accuracy of the Negative Binomial model
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0416666666666667"
Negative Binomial with Elapsed time plus Continent of origin as predictors
# nb4 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Continente de procedencia_Asia`+`Continente de procedencia_Centroamérica`+`Continente de procedencia_Colombia`+`Continente de procedencia_Europa`+`Continente de procedencia_Norteamérica`+`Continente de procedencia_Sudamerica`, data=data1[1:22, ])
# plot(nb4, which=1)
# nb4.pred <- predict(nb4, newdata = data1[23:25, ], type = "response")
# paste("RMSE:", sqrt(mean((nb4.pred - data1$`Cumulative cases`[23:25])^2)))
# #paste("MSE:", mean(nb4$residuals^2))
# paste("AIC:", nb4$aic)
# paste(c("Null deviance: ", "Residual deviance:"),
# round(c(nb4$null.deviance, deviance(nb4)), 2))
Negative Binomial with Elapsed time, Age and Departments as predictors
nb4 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.`+`Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas`+`Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`, data=data1[1:120, ])
plot(nb4, which=1)
# Calculating overdispersion: n=120, k=19, n-k=101
stdres <- rstandard(nb4)
print(paste("Estimated overdispersion", sum(stdres^2)/101))
nb4.pred <- predict(nb4, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((nb4.pred - data1$`Cumulative cases`[120:126])^2)))
#paste("MSE:", mean(nb4$residuals^2))
paste("AIC:", nb4$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(nb4$null.deviance, deviance(nb4)), 2))
## [1] "Estimated overdispersion 187.463666136172"
## [1] "RMSE: 8253.89918358645"
## [1] "AIC: 18080.4893864393"
## [1] "Null deviance: 1739076.35" "Residual deviance: 16847.74"
Predictive accuracy of the Negative Binomial model
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.0583333333333333"
Negative Binomial for New cases/day with Elapsed time, Age and Departments as predictors
nb5bis <- glm.nb(`New cases/day` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.`+`Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas`+`Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`, data=data1[1:120, ])
plot(nb5bis, which=1)
# Calculating overdispersion: n=120, k=19, n-k=101
stdres2 <- rstandard(nb5bis)
print(paste("Estimated overdispersion", sum(stdres2^2)/101))
nb5bis.pred <- predict(nb5bis, newdata = data1[120:126, ], type = "response")
paste("RMSE:", sqrt(mean((nb5bis.pred - data1$`New cases/day`[120:126])^2)))
#paste("MSE:", mean(nb5bis$residuals^2))
paste("AIC:", nb5bis$aic)
paste(c("Null deviance: ", "Residual deviance:"),
round(c(nb5bis$null.deviance, deviance(nb5bis)), 2))
## [1] "Estimated overdispersion 1.6253849069091"
## [1] "RMSE: 1379.56664915406"
## [1] "AIC: 1435.1440399102"
## [1] "Null deviance: 1416.29" "Residual deviance: 146.46"
Predictive accuracy of the NB model for New cases/day
Predicting with a \(95\%\) confidence interval
## [1] "Frequency of coverage: 0.575"
Autocorrelation to compare Negative Binomial models
We generated 10,000 samples from each of the four Negative Binomial models, computed the lag-1 autocorrelation of each simulated series, and compared it against the autocorrelation of our original sample.
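The simulate-then-extract pattern used below can be seen in miniature on simulated data (all names here are illustrative, not the report's):

```r
# Miniature of the procedure: simulate replicated series from a fitted model,
# then extract the lag-1 autocorrelation of each replicate.
set.seed(5)
y <- rpois(50, 10)
fit <- glm(y ~ 1, family = poisson)
sims <- simulate(fit, nsim = 1000)           # 1000 replicated series
lag1 <- sapply(sims, function(s) acf(s, lag.max = 1, plot = FALSE)$acf[2])
summary(lag1)                                # centres near 0 for iid draws
```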
Autocorrelation for the models when the response variable is Cumulative Cases
#nb1
rnb1 <- simulate(nb1,nsim=10000)
mat1 <- data.matrix(rnb1)
acfnb1 <- lapply(split(mat1,col(mat1)), function(ts) acf(ts, lag.max=1,plot=F))
Neg.Bin_1 <- sapply(acfnb1, function(a) a$acf[2])  # lag-1 autocorrelation of each series
#nb2
rnb2 <- simulate(nb2,nsim=10000)
mat2 <- data.matrix(rnb2)
acfnb2 <- lapply(split(mat2,col(mat2)), function(ts) acf(ts, lag.max=1,plot=F))
Neg.Bin_2 <- sapply(acfnb2, function(a) a$acf[2])
#nb3
rnb3 <- simulate(nb3,nsim=10000)
mat3 <- data.matrix(rnb3)
acfnb3 <- lapply(split(mat3,col(mat3)), function(ts) acf(ts, lag.max=1,plot=F))
Neg.Bin_3 <- sapply(acfnb3, function(a) a$acf[2])
#nb4
rnb4 <- simulate(nb4,nsim=10000)
mat4 <- data.matrix(rnb4)
acfnb4 <- lapply(split(mat4,col(mat4)), function(ts) acf(ts, lag.max=1,plot=F))
Neg.Bin_4 <- sapply(acfnb4, function(a) a$acf[2])
#autocorrelation dataframe
autocor <- data.frame(Neg.Bin_1,Neg.Bin_2,Neg.Bin_3,Neg.Bin_4)
library(reshape2)
x1 <- melt(autocor)
#autocorrelation of original sample
tmp <- acf(data1$`Cumulative cases`, lag.max = 1, plot = FALSE)
hline1 <- as.numeric(tmp$acf[2])  # lag-1 autocorrelation; needed by geom_hline below
#boxplot
theme_set(theme_gray())
q<-ggplot(x1, aes(x=variable, y=value, fill=variable)) +
geom_boxplot()+
labs(title="Sample Autocorrelation", x="Models", y="Autocorrelation",fill='Models')+
scale_fill_manual(values=c("#FFFF00", "#FF66FF", "#00CCFF", "green"))
q + geom_hline(aes(yintercept=hline1,linetype="Autocorrelation"),color="red",size=1)+
scale_linetype_manual(name = "Original sample", values = c(1, 1))
Autocorrelation for the model when the response variable is New cases/day
#Neg.Bin_A -> this is the model nb5bis, renamed to make the presentation easier to follow
rnbA <- simulate(nb5bis,nsim=10000)
matrixnbA <- data.matrix(rnbA)
acfnbA <- lapply(split(matrixnbA,col(matrixnbA)), function(ts) acf(ts, lag.max=1,plot=F))
Neg.Bin_A <- sapply(acfnbA, function(a) a$acf[2])  # lag-1 autocorrelation of each series
#autocorrelation dataframe
autocor <- data.frame(Neg.Bin_A)
library(reshape2)
x1 <- melt(autocor)
#autocorrelation of original sample
tmp2 <- acf(cases$`New cases/day`,lag.max=1,plot=F)
hline2 <- as.numeric(tmp2$acf[2])  # lag-1 autocorrelation of the original sample
#boxplot
theme_set(theme_gray())
p<-ggplot(x1, aes(x=variable, y=value, fill=variable)) +
geom_boxplot()+
labs(title="Sample Autocorrelation", x="Models", y="Autocorrelation",fill='Models')+
scale_fill_manual(values=c("green"))
p + geom_hline(aes(yintercept=hline2,linetype="Autocorrelation"),color="red",size=1)+
scale_linetype_manual(name = "Original sample", values = c(1, 1))
Applying ANOVA to compare the negative binomial models
## Likelihood ratio tests of Negative Binomial Models
##
## Response: Cumulative cases
## Model
## 1 Elapsed time
## 2 `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`
## 3 `Elapsed time` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima`
## 4 `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima`
## theta Resid. df 2 x log-lik. Test df LR stat. Pr(Chi)
## 1 11252781 118 -21905.18
## 2 12919543 113 -20629.27 1 vs 2 5 1275.907 0
## 3 13821345 107 -19242.25 2 vs 3 6 1387.022 0
## 4 13728064 102 -18042.49 3 vs 4 5 1199.758 0
The Bayesian approach
Poisson regression
As a first attempt, we fit a simple Poisson regression:
\[ ln(\lambda_i) = \alpha + \beta\cdot elapsed\_time_i \\ y_i \sim \mathcal{Poisson}(\lambda_i) \\ \alpha \sim \mathcal{N}(0,1) \\ \beta \sim \mathcal{N}(0.25,1) \]
with \(i = 1,\dots,134\), where \(134\) is the number of rows of our dataset and \(y_i\) is the number of cases.
Regarding the Stan program, we used the function poisson_log_rng to describe the distribution of \(y_i\), namely the number of cases each day, and the function poisson_log_lpmf to specify the likelihood.
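A minimal sketch of a Stan program of this form (data names assumed, not the report's actual code) could be:

```stan
// Hypothetical sketch of the Poisson regression above; data names are assumed.
data {
  int<lower=0> N;              // number of observations (here 134)
  vector[N] elapsed_time;
  int<lower=0> y[N];           // daily number of cases
}
parameters {
  real alpha;
  real beta;
}
model {
  alpha ~ normal(0, 1);
  beta ~ normal(0.25, 1);
  target += poisson_log_lpmf(y | alpha + beta * elapsed_time);
}
generated quantities {
  int y_rep[N];
  for (n in 1:N)
    y_rep[n] = poisson_log_rng(alpha + beta * elapsed_time[n]);
}
```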
Posterior predictive check
y_rep <- as.matrix(fit1, pars="y_rep")
ppc_dens_overlay(y = model.data$cases, y_rep[1:200,]) +
coord_cartesian(xlim = c(-1, 7000))
The fit is not satisfactory; this is probably due to overdispersion. We can check the residuals to confirm this hypothesis.
Residual check
#in this way we check the standardized residuals
mean_y_rep <- colMeans(y_rep)
std_residual <- (model.data$cases - mean_y_rep) / sqrt(mean_y_rep)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)
The variance of the residuals increases as the predicted values increase. The standardized residuals should have mean 0 and standard deviation 1 (hence the lines at \(+2\) and \(-2\) indicate approximate \(95\%\) error bounds).
The plot of the standardized residuals indicates a large amount of overdispersion.
Classically, the problem of overdispersed data is addressed by using the negative binomial model instead of the Poisson one.
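The contrast can be illustrated on simulated overdispersed counts (all names and data here are illustrative): the Poisson fit leaves a Pearson dispersion far above 1, while glm.nb absorbs the extra variance through its \(\theta\) parameter.

```r
library(MASS)
set.seed(3)
x <- 1:200
y <- rnbinom(200, mu = exp(1 + 0.01 * x), size = 2)   # overdispersed counts
pois <- glm(y ~ x, family = poisson)
nb   <- glm.nb(y ~ x)
disp <- sum(residuals(pois, type = "pearson")^2) / pois$df.residual
c(poisson_dispersion = disp, nb_theta = nb$theta)      # disp >> 1; theta near 2
```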
Negative Binomial model
We try to improve the previous model using the Negative Binomial model:
\[ ln(\lambda_i) = \alpha + \beta\cdot elapsed\_time_i \\ y_i \sim \mathcal{Negative Binomial}(\lambda_i, \phi) \\ \alpha \sim \mathcal{N}(0,1) \\ \beta \sim \mathcal{N}(0.25,1) \]
where the parameter \(\phi\), called the precision, is such that:
\[ E[y_i] = \lambda_i \\ Var[y_i] = \lambda_i + \frac{\lambda_i^2}{\phi} \]
again \(i=1,\dots,134\). As \(\phi \rightarrow \infty\) the negative binomial approaches the Poisson distribution.
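This mean-variance relation can be checked numerically with base R's rnbinom, whose size argument plays the role of \(\phi\):

```r
# Numerical check of E[y] = lambda and Var[y] = lambda + lambda^2 / phi.
set.seed(42)
lambda <- 50; phi <- 5
y <- rnbinom(1e6, mu = lambda, size = phi)
mean(y)   # close to lambda = 50
var(y)    # close to lambda + lambda^2 / phi = 550
```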
The Stan functions that we use here are neg_binomial_2_log_rng to specify the distribution of \(y_i\) and neg_binomial_2_log_lpmf for the likelihood.
Posterior predictive check
samples_NB <- rstan::extract(fit2)
y_rep <- samples_NB$y_rep
ppc_dens_overlay(y = model.data$cases, y_rep[1:200,]) +
coord_cartesian(xlim = c(-1, 6000))
Residual check
mean_inv_phi <- mean(samples_NB$inv_phi)
mean_y_rep <- colMeans(y_rep)
std_residual <- (model.data$cases - mean_y_rep) / sqrt(mean_y_rep + mean_y_rep^2*mean_inv_phi)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)
The situation is better now, but we still have too many residuals outside the \(95\%\) interval.
Accuracy across departments
ppc_stat_grouped(
y = model.data$cases,
yrep = y_rep,
group = cases_dep$Department,
stat = "mean",
binwidth = 0.2
)
We should take into account the differences across departments.
Multilevel Negative Binomial regression
We try to fit the following model, which also includes Age as a covariate:
\[ ln(\lambda_i) = \alpha + \beta_{time}\cdot elapsed\_time_i + \beta_{age}\cdot age_i \\ y_i \sim \mathcal{Negative Binomial}(\lambda_i, \phi) \\ \alpha \sim \mathcal{N}(0,1) \\ \beta_{time} \sim \mathcal{N}(0.5,1) \\ \beta_{age} \sim \mathcal{N}(0,1) \]
Posterior predictive check
samples_NB2 <- rstan::extract(fit3)
y_rep <- samples_NB2$y_rep
ppc_dens_overlay(y = model.data2$cases, y_rep[1:200,]) +
coord_cartesian(xlim = c(-1, 6000))
Residual check
mean_inv_phi <- mean(samples_NB2$inv_phi)
mean_y_rep <- colMeans(y_rep)
std_residual <- (model.data2$cases - mean_y_rep) / sqrt(mean_y_rep + mean_y_rep^2*mean_inv_phi)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)
Accuracy across departments
Hierarchical model
To improve the fit, we fit a model with a department-specific intercept term.
The varying-intercept model that we consider is now:
\[ ln(\lambda_{i,d}) = \alpha_d + \beta_{time}\cdot elapsed\_time_i + \beta_{age}\cdot age_i\\ \alpha_d \sim \mathcal{N}(\mu + \beta_{pop}\cdot pop_d + \beta_{sur}\cdot surface_d + \beta_{dens} \cdot density_d, \sigma_{\alpha})\\ y_i \sim \mathcal{Negative Binomial}(\lambda_{i,d}, \phi) \]
The priors used for the above model are the following:
\[ \beta_{time} \sim \mathcal{N}(0.5,1) \\ \beta_{age} \sim \mathcal{N}(0,1) \\ \psi \sim \mathcal{N}(0,1) \]
where \(\psi = [\beta_{pop}, \beta_{sur}, \beta_{dens}]\).
New dataset
We added the following covariates to the dataset:
People: millions of inhabitants of each region;
Surface: extent of each region, in \(km^2\);
Density: population density of each region, in \(\frac{people}{km^2}\).
The model is:
Posterior predictive check
samples_hier <- rstan::extract(fit.4)
y_rep <- samples_hier$y_rep
ppc_dens_overlay(y = data.hier.NB.complete$cases, y_rep[1:200,]) +
coord_cartesian(xlim = c(-1, 6000))
Residual check
mean_inv_phi <- mean(samples_hier$inv_phi)
mean_y_rep <- colMeans(y_rep)
std_residual <- (data.hier.NB.complete$cases - mean_y_rep) / sqrt(mean_y_rep + mean_y_rep^2*mean_inv_phi)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)
Very few points are now outside the \(95\%\) confidence interval.
Accuracy across departments
ppc_stat_grouped(
y = data.hier.NB.complete$cases,
yrep = y_rep,
group = cases_dep$Department,
stat = "mean",
binwidth = 0.2
)
We can clearly see that the accuracy across the departments has increased substantially with respect to the previous models.
LOOIC
Leave-One-Out cross-validation is a method for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model, using the log-likelihood evaluated at the posterior simulations of the parameter values.
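The idea can be illustrated with a brute-force plug-in version on a toy Poisson GLM: refit without each point, then score the held-out point. This is only a maximum-likelihood caricature of the Bayesian elpd_loo that the loo package estimates via PSIS; all names and data below are illustrative.

```r
# Brute-force leave-one-out log predictive density for a toy Poisson GLM.
set.seed(7)
x <- 1:30
y <- rpois(30, exp(0.2 + 0.05 * x))
loo_lpd <- sapply(seq_along(y), function(i) {
  fit <- glm(y[-i] ~ x[-i], family = poisson)
  mu  <- exp(coef(fit)[1] + coef(fit)[2] * x[i])
  dpois(y[i], mu, log = TRUE)   # log predictive density of the held-out point
})
looic <- -2 * sum(loo_lpd)      # lower values indicate better predictions
looic
```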
Plot the looic to compare models:
loo.all.deps <- unlist(c(loo.model.Poisson[3], loo.model.NB[3], loo.model.NB2[3], loo.model.NB.hier[3]))
sort.loo.all.deps <- sort.int(loo.all.deps, index.return = TRUE)$x
par(xaxt="n")
plot(sort.loo.all.deps, type="b", xlab="", ylab="LOOIC", main="Model comparison")
par(xaxt="s")
axis(1, c(1:4), c("Poisson", "NB-sl", "NB-ml",
"hier")[sort.int(loo.all.deps,
index.return = TRUE)$ix],
las=2)